
    Image enhancement from a stabilised video sequence

    The aim of video stabilisation is to create a new video sequence where the motions (i.e. rotations, translations) and scale differences between frames (or parts of a frame) have effectively been removed. These stabilisation effects can be obtained via digital video processing techniques which use the information extracted from the video sequence itself, with no need for additional hardware or knowledge about the camera's physical motion. A video sequence usually contains a large overlap between successive frames, and regions of the same scene are sampled at different positions. In this paper, this multiple sampling is combined to achieve images with a higher spatial resolution. Higher resolution imagery plays an important role in assisting the identification of people, vehicles, structures or objects of interest captured by surveillance cameras or by video cameras used in face recognition, traffic monitoring, traffic law enforcement, driver assistance and automatic vehicle guidance systems.
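
    The abstract does not give an algorithm listing; purely as a rough illustration of how multiply-sampled, registered frames can be combined on a finer grid, the shift-and-add sketch below (Python/NumPy; the function name and the assumption of already-estimated sub-pixel shifts are illustrative, not the paper's method) accumulates each low-resolution frame onto a high-resolution grid and averages the contributions.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive shift-and-add super-resolution sketch (illustration only).

    frames : list of 2-D arrays, low-resolution frames of the same scene
    shifts : list of (dy, dx) sub-pixel shifts of each frame relative to the
             reference frame, expressed in low-resolution pixels
    factor : integer up-sampling factor of the high-resolution grid
    """
    h, w = frames[0].shape
    hi_sum = np.zeros((h * factor, w * factor))
    hi_cnt = np.zeros_like(hi_sum)

    for frame, (dy, dx) in zip(frames, shifts):
        # Map low-resolution pixel centres, displaced by the estimated
        # sub-pixel shift, onto the high-resolution grid.
        rows = np.round((np.arange(h) + dy) * factor).astype(int)
        cols = np.round((np.arange(w) + dx) * factor).astype(int)
        rr, cc = np.meshgrid(rows, cols, indexing="ij")
        valid = (rr >= 0) & (rr < hi_sum.shape[0]) & \
                (cc >= 0) & (cc < hi_sum.shape[1])
        hi_sum[rr[valid], cc[valid]] += frame[valid]
        hi_cnt[rr[valid], cc[valid]] += 1

    # Average where samples landed; empty cells still need interpolation.
    return np.divide(hi_sum, hi_cnt, out=np.zeros_like(hi_sum),
                     where=hi_cnt > 0)
```

    Cells that receive no samples still need to be interpolated; the least-squares plane-fitting approach described in the next output is one way of handling such irregularly spaced samples.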

    An application of the least squares plane fitting interpolation process to image reconstruction and enhancement

    This work applies a least squares plane fitting (LSP) method as an alternative way of interpolating irregularly spaced pixel intensity values, making it suitable for image reconstruction of a static scene via super-resolution (SR). SR is a term used within the computer vision and image processing community to describe the process of reconstructing a high resolution image from a sequence of several shifted images covering the same scene. The accuracy attainable by this process is estimated via tests where the simulation parameters are controlled and where the reconstructed high resolution image can be compared with its original. In these tests the original image is scanned randomly so as to create a sequence of low-resolution, JPEG-compressed shifted images. The comparison is based on the r.m.s.e. of the differences between the reconstructed image and the original.
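
    As a minimal sketch of the plane-fitting idea (not the authors' implementation; the function names and toy data are made up for illustration), the snippet below fits z = a·x + b·y + c by least squares to a handful of irregularly placed intensity samples, evaluates the fitted plane at a grid node, and includes the r.m.s.e. measure used for the comparison.

```python
import numpy as np

def fit_plane_lsq(x, y, z):
    """Least-squares fit of a plane z = a*x + b*y + c to scattered samples."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                              # (a, b, c)

def evaluate_plane(coeffs, xq, yq):
    """Evaluate the fitted plane at a query position."""
    a, b, c = coeffs
    return a * xq + b * yq + c

def rmse(reconstructed, original):
    """Root-mean-square error between a reconstruction and its reference."""
    d = np.asarray(reconstructed, float) - np.asarray(original, float)
    return np.sqrt(np.mean(d ** 2))

# Toy usage: intensities sampled at irregular sub-pixel positions around one
# output grid node (made-up values), interpolated back onto that node.
x = np.array([0.1, 0.9, 0.4, 0.7])
y = np.array([0.2, 0.3, 0.8, 0.6])
z = np.array([10.0, 12.0, 11.5, 11.8])
print(evaluate_plane(fit_plane_lsq(x, y, z), 0.5, 0.5))
```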

    Digital image blurring-deblurring process for improved storage and transmission

    This paper investigates the feasibility of using a blurring-deblurring process as a pre- and post-processing step in standard image reconstruction and compression. As such, the paper relates to image coding and compression systems whereby an original image can be transmitted or stored in a coded and compressed representation which renders it blurred and degraded. The compressibility of an image increases with the blurring, and the relation between compression ratio (CR) and the blurring scale is approximately linear. Hence, by pre-processing and blurring an image before compression, the CR will increase accordingly. The function or process tested here for blurring-deblurring an image is based on pixel group processing, whereby the original image is sampled at sub-pixel levels. Since the sub-pixel shifts between each pixel group sample are known exactly, a blurred image is created which can be shown to contain the details of the original image and can thereby be restored or reconstructed by reversing the blurring process. The complementary effects of increased CR are examined in terms of coding/decoding execution times and the quality of the reconstructed images.
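
    The paper builds its blur from sub-pixel-shifted pixel-group sampling; the sketch below simplifies this to an ordinary convolution with a known, invertible kernel and reverses it with a lightly regularised inverse filter, purely to illustrate that a blur whose parameters are known exactly can be undone. The kernel, the regularisation constant and the function names are assumptions of this illustration, not the paper's process.

```python
import numpy as np

def blur(image, kernel):
    """Circular convolution with a known kernel (FFT-based for brevity)."""
    K = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

def deblur(blurred, kernel, eps=1e-6):
    """Reverse a known blur with a regularised inverse filter.

    eps guards against division by small frequency responses; for an
    invertible kernel the round trip is essentially exact.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * H))

# Toy check with a mild, invertible 3x3 low-pass kernel (illustration only).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = np.array([[0.02, 0.08, 0.02],
              [0.08, 0.60, 0.08],
              [0.02, 0.08, 0.02]])
print(np.abs(deblur(blur(img, k), k) - img).max())   # tiny residual
```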

    An accuracy assessment of a GPS-enabled digital camera

    As the consumer market embraces digital imaging, digital cameras are becoming less expensive while producing higher resolution images. These advances in camera technology, combined with GPS technology and consumer-grade photogrammetric software, can provide non-contact, inexpensive, safe and practical measuring systems which can be used for direct geo-referencing of points of interest and 3D modelling. This study investigates the accuracy of a measuring scheme based on a GPS-enabled digital camera (Ricoh 500SE) and photogrammetry software (PhotoModeler by Eos Systems Inc). The coordinate values of selected target points determined by this close-range photogrammetric system are compared to the coordinates of the same target points computed using a surveying total station. The target points were not chosen to suit the photogrammetric technique but were in fact natural targets forming part of a noise barrier located along a busy motorway. The results of this study showed that measurements with the proposed system differed from the more precise surveying measurements by an overall positional accuracy in x, y and z of 0.5 m. This expected accuracy result was essentially a function of the accuracy of the GPS unit, and when a more accurate unit becomes available and is incorporated with the camera, the accuracy should approach that of a total station. The efficiency of the GPS-enabled camera system was the key point highlighted by this study.
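
    The accuracy figures quoted above reduce to comparing two coordinate sets for the same target points. A minimal sketch of that comparison (hypothetical function name and made-up coordinates, not the study's data) is:

```python
import numpy as np

def positional_accuracy(photo_xyz, survey_xyz):
    """Per-axis RMSE and overall 3-D positional RMSE between two coordinate
    sets for the same target points (each given as an (n, 3) array)."""
    d = np.asarray(photo_xyz, float) - np.asarray(survey_xyz, float)
    rmse_xyz = np.sqrt(np.mean(d ** 2, axis=0))         # per-axis RMSE
    rmse_3d = np.sqrt(np.mean(np.sum(d ** 2, axis=1)))  # overall positional RMSE
    return rmse_xyz, rmse_3d

# Made-up coordinates, illustration only.
photo = np.array([[100.2, 200.1, 50.3], [101.0, 198.8, 49.9]])
survey = np.array([[100.0, 200.0, 50.0], [100.7, 199.0, 50.1]])
print(positional_accuracy(photo, survey))
```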

    A process for the accurate reconstruction of pre-filtered and compressed digital aerial images

    The study of compression and decompression methods is crucial for the storage and/or transmission of the large volumes of image data required for archiving aerial photographs, satellite images and digital ortho-photos. Hence, the proposed work aims to increase the compression ratio (CR) of digital images in general. While the emphasis is on aerial images, the same principle may find application in other types of raster-based images. The process described here involves the application of pre-defined low-pass filters (i.e. kernels) prior to applying standard image compression encoders. Low-pass filters have the effect of increasing the dependence between neighbouring pixels, which can be used to improve the CR. However, for this pre-filtering process to be considered a compression instrument, it should allow the original image to be accurately restored from its filtered counterpart. The restoration process presented in this study is based on the theory of least squares and assumes knowledge of the filtered image and of the low-pass filter applied to the original image. The process is a variant of a super-resolution algorithm previously described, but its application and adaptation to the filtering and restoration of images, in this case (but not exclusively) aerial imagery, using a number of scales and filter dimensions is the extension detailed here. An example of the proposed process is detailed in the ensuing sections. The example is also indicative of the degree of accuracy that can be attained when applying this process to grey-scale images of different entropies, coded in a lossy or lossless mode.
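
    The restoration step is posed as a least-squares problem: recover the original image given its low-pass filtered version and the kernel that produced it. A hypothetical sketch of that formulation (solved here with conjugate gradients on the normal equations, assuming a circular boundary so the flipped kernel is the exact adjoint; this is not the paper's exact algorithm) is shown below.

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.sparse.linalg import LinearOperator, cg

def restore_least_squares(filtered, kernel, iters=200):
    """Recover an image from its known low-pass filtered version by finding
    x that minimises ||K x - b||^2, where K applies the known kernel."""
    shape = filtered.shape
    k_flip = kernel[::-1, ::-1]                 # adjoint kernel (correlation)

    def K(v):                                   # forward low-pass filter
        return convolve(v.reshape(shape), kernel, mode="wrap").ravel()

    def Kt(v):                                  # adjoint of the filter
        return convolve(v.reshape(shape), k_flip, mode="wrap").ravel()

    A = LinearOperator((filtered.size, filtered.size),
                       matvec=lambda v: Kt(K(v)))
    x, _ = cg(A, Kt(filtered.ravel().astype(float)), maxiter=iters)
    return x.reshape(shape)

# Toy round trip with a mild, well-conditioned 3x3 kernel (illustration only).
rng = np.random.default_rng(1)
orig = rng.random((32, 32))
k = np.array([[1.0, 2.0, 1.0], [2.0, 16.0, 2.0], [1.0, 2.0, 1.0]]) / 28.0
restored = restore_least_squares(convolve(orig, k, mode="wrap"), k)
print(np.abs(restored - orig).max())   # small; depends on kernel conditioning
```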

    Limits of multi-frame image enhancement: a case of super-resolution

    A common and important problem that arises in visual communications is the need to create an enhanced-resolution video image sequence from a lower resolution input video stream. This can be accomplished by exploiting the spatial correlations that exist between successive video frames using Super-Resolution (SR) reconstruction. SR refers to the task of increasing the spatial resolution through multiple frame processing. Multi-frame resolution enhancement methods are of increasing interest in digital image processing and there has been a substantial amount of research in developing algorithms that combine a set of low-quality images to produce a set of higher quality images. Either explicitly or implicitly, such algorithms must perform the common task of registering and fusing the low-quality image data. While many such processes have been proposed, very little work has addressed their limits. In this context, an algorithm designed to operate in the spatial domain is used in a controlled test to compute a higher-resolution image by mapping a model of the image formation process using local sub-pixel shifts among the lower resolution and compressed images of the same scene. These shifts are determined by way of a rigorous least-squares area-based image-matching scheme that does not require control points. Statistical results show that the performance of the algorithm does degrade, as would be expected, depending on (1) the amount of noise present in the low-resolution images, (2) the number of low-resolution input images and (3) the magnification factor required to meet resolution requirements.
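
    The sub-pixel shifts are determined by a least-squares area-based matching scheme. A stripped-down, translation-only sketch of such a matcher (assuming small shifts and identical radiometry; not the paper's full rigorous model) could look like the following.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift_lsq(ref, tgt, iters=5):
    """Estimate a global sub-pixel translation (dy, dx) between two images by
    linearised least squares; tgt is assumed to be ref shifted by a small
    amount, with no rotation or radiometric change (sketch only)."""
    ref = ref.astype(float)
    gy, gx = np.gradient(ref)                      # image gradients
    A = np.column_stack([gy.ravel(), gx.ravel()])
    AtA = A.T @ A
    dy = dx = 0.0
    for _ in range(iters):
        # Warp the target back by the current estimate and update it from
        # the least-squares solution of the linearised residual equations.
        warped = nd_shift(tgt.astype(float), (-dy, -dx), order=3, mode="reflect")
        r = (ref - warped).ravel()
        ddy, ddx = np.linalg.solve(AtA, A.T @ r)
        dy, dx = dy + ddy, dx + ddx
    return dy, dx

# Toy check on a smooth synthetic image shifted by a known amount.
y, x = np.mgrid[0:64, 0:64]
ref = np.sin(x / 7.0) + np.cos(y / 5.0)
tgt = nd_shift(ref, (0.3, -0.6), order=3, mode="reflect")
print(estimate_shift_lsq(ref, tgt))                # expect roughly (0.3, -0.6)
```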

    Mapping city environments using a single hand-held digital camera

    Densely populated cities pose a problem for reliable navigation and tracking solutions using GPS. Narrow streets between high-rise buildings block GPS signal paths, provide limited visibility to satellites and cause multipath effects, resulting in degraded navigation accuracy and reliability. In this context this paper proposes a conceptual framework for a mapping process that attempts to survey city blocks where the only sensory input is a single low-cost digital camera. This proof-of-concept measuring process begins with taking images of at least two known geo-referenced control points and involves taking sequential images as the user walks forward around a city block. Each image is linked to previous images of the same scene taken from a previously occupied location. Bundle adjustments and registration algorithms compute the direction and orientation between sequential images, which, in turn, allows the extraction of 3D information for selected target points visible in the images. Computations are carried out within an office environment using a consumer-grade photogrammetric software program. A controlled field experiment is described to demonstrate the capabilities of the system. This experiment is not intended to be comprehensive, but it gives useful results that can be taken as typical indicators of the accuracy to be expected from the proposed measuring scheme. This was ascertained by statistically evaluating and comparing the coordinate values obtained during the test phase with the coordinates of the same target points as computed with what was considered to be more accurate total station surveying instrumentation.
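
    The 3D extraction step ultimately amounts to intersecting rays from two or more oriented images, which the study performs with consumer-grade photogrammetric software. Purely as a hypothetical illustration of that step, a linear (DLT) two-view triangulation of a single target point from known 3x4 projection matrices can be written as follows; the camera parameters in the toy check are made up.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3-D point from two oriented images.

    P1, P2 : 3x4 projection matrices (e.g. from a bundle adjustment)
    uv1, uv2 : (u, v) image measurements of the same target point
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)                 # null space gives the point
    X = Vt[-1]
    return X[:3] / X[3]

# Toy check: two cameras 1 unit apart looking down the Z axis (made-up values).
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([2.0, 1.0, 10.0, 1.0])
uv1 = (P1 @ X)[:2] / (P1 @ X)[2]
uv2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, uv1, uv2))            # expect roughly [2, 1, 10]
```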

    Digital image improvement by integrating images of different resolutions

    This proof-of-concept paper evaluates the performance of super-resolution (SR) imaging when combining a sequence of images of different resolutions. Traditional work in SR requires accurate sub-pixel registration and/or alignment techniques, under the assumption that all the frames in the sequence are captured with a random sub-pixel translation and at the same spatial resolution. However, if there is no relative motion between the scene and the sensor, the super-resolution problem can be approached by integrating images of different resolutions, using the image re-sampling ratios as the enhancing factor. This may be the case when using a digital zoom, whereby a scene of interest is enhanced by integrating the information obtained from capturing that scene at different zoom levels. Preliminary results of tests conducted on synthetic and real data (scanned images) are presented. In both cases the process under-samples the image of an object of interest within a scene at different resolutions. By integrating these images using an algebraic process, an improved composite is obtained containing more spatial information than that provided by simply interpolating a single image.
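
    The abstract describes an algebraic process for integrating images of different resolutions using their re-sampling ratios. As a hedged sketch of one such formulation, the code below assumes each lower-resolution image can be modelled as an area average of a common high-resolution grid (an assumption of this illustration, not necessarily the paper's model), stacks the observations and solves the resulting system in a least-squares sense.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def downsample(hr, f):
    """Area-average a high-resolution image by an integer factor f."""
    h, w = hr.shape
    return hr.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample_adjoint(lr, f):
    """Adjoint of area averaging: spread each value uniformly over its block."""
    return np.repeat(np.repeat(lr, f, axis=0), f, axis=1) / (f * f)

def integrate(lowres_images, factors, hr_shape, iters=500):
    """Least-squares integration of images observed at different resolutions;
    the factors must divide the high-resolution dimensions exactly."""
    n = hr_shape[0] * hr_shape[1]
    m = sum(im.size for im in lowres_images)

    def matvec(v):
        hr = v.reshape(hr_shape)
        return np.concatenate([downsample(hr, f).ravel() for f in factors])

    def rmatvec(v):
        out = np.zeros(hr_shape)
        i = 0
        for im, f in zip(lowres_images, factors):
            out += upsample_adjoint(v[i:i + im.size].reshape(im.shape), f)
            i += im.size
        return out.ravel()

    A = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
    b = np.concatenate([im.ravel().astype(float) for im in lowres_images])
    return lsqr(A, b, iter_lim=iters)[0].reshape(hr_shape)

# Toy usage: one scene observed at 1/2 and 1/4 resolution (illustration only).
rng = np.random.default_rng(2)
truth = rng.random((32, 32))
estimate = integrate([downsample(truth, 2), downsample(truth, 4)],
                     [2, 4], truth.shape)
```

    With only coarser-than-target observations the stacked system is under-determined, so LSQR returns a minimum-norm solution; in practice some form of regularisation or interpolation would be added.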

    Using geospatial tools to combat fire ants

    State-of-the-art geo-spatial information capture, storage and analysis is vital in the ongoing battle against Red Imported Fire Ants (RIFA). In this context, this article describes how geo-spatial tools and techniques can play an important role in identifying potential sites of fire ant infestation and, at the same time, help in modelling and predicting the spread of these invasive pests.

    Compressing images using contours

    A method for the vectorisation of digital images into contour maps, with subsequent conversion of the contours back to a pixel format, is presented. This method may offer an alternative form of spatial image compression which is computationally inexpensive and can be directly applied to compressing certain classes of grey-scale and/or colour (RGB, 24-bit) imagery of any size. The feasibility of this study is based on research which shows that pixel-based imagery can be sufficiently and accurately represented by its contour maps if a suitable contour model and scale selection method is used (Elder and Goldberg, 2001). The compression process is based on filtering and eliminating those contours that may contain redundant information. Contours extracted from digital images may contain much redundant data (i.e. intersecting or nested contours), any of which might logically be used as a basis for discrimination or, on the other hand, used in the reconstruction of the original captured image. As with current image compression techniques, the proposed method does not require special hardware and, if combined with existing encoding schemes, it can be used efficiently for image transmission purposes due to its relatively modest storage requirements. Depending on the application, and on the number of contour lines employed in the reconstruction of an image, the process allows for various levels of image accuracy while preserving visual integrity and reducing compression artifacts.
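
    The vectorise-then-rasterise round trip can be illustrated with commonly available routines: extract iso-intensity contours from a grey-scale image and rasterise them back by interpolating between the retained contour points. The sketch below uses scikit-image and SciPy as stand-ins for that round trip; the filtering and selection of redundant contours, which is the core of the proposed method, is not reproduced here.

```python
import numpy as np
from skimage import measure
from scipy.interpolate import griddata

def image_to_contours(image, levels):
    """Vectorise a grey-scale image into iso-intensity contour polylines."""
    contours = []
    for lev in levels:
        for c in measure.find_contours(image, lev):   # (row, col) polylines
            contours.append((lev, c))
    return contours

def contours_to_image(contours, shape):
    """Rasterise contours back to pixels by interpolating between them."""
    pts = np.vstack([c for _, c in contours])
    vals = np.concatenate([np.full(len(c), lev) for lev, c in contours])
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    img = griddata(pts, vals, (rr, cc), method="linear")
    # Fill pixels outside the convex hull of the contour points.
    nearest = griddata(pts, vals, (rr, cc), method="nearest")
    return np.where(np.isnan(img), nearest, img)

# Toy usage: vectorise and rasterise a smooth synthetic image.
y, x = np.mgrid[0:64, 0:64]
img = np.sin(x / 9.0) * np.cos(y / 11.0)
levels = np.linspace(img.min(), img.max(), 12)[1:-1]   # a coarse level set
approx = contours_to_image(image_to_contours(img, levels), img.shape)
```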